The case of Bloomberg reporter Jennah Haque shows how far the problem has already advanced. A few months ago, she received a welcome package from Ultimate Medical Academy in Tampa, including an XL men’s polo shirt. The problem: she had never applied.

Someone had submitted 13 college applications in her name and filed several student aid requests, potentially seeking more than $50,000 in student loans. Her name, date of birth, address and Social Security number were all correct. Only one detail was wrong: the applications listed an Alabama high school she had never attended.

Generative AI Is Changing the Scale of Identity Fraud

Haque’s case is not an isolated incident. According to her Bloomberg investigation, identity theft powered by generative AI has reached a new level.

Data from the Identity Theft Resource Center shows that 2025 saw the highest number of data compromises since the organization began tracking them in 2005. Increasingly, specialized AI tools and deepfakes are being used in these attacks.

Michael Bruemmer, Vice President of Consumer Protection at Experian, one of the three major U.S. credit reporting agencies, said that 40% of the 5,000 data breaches his team handled for affected companies last year involved AI support. Experian expects agentic AI — autonomous systems operating with limited human oversight — to become the main driver of identity fraud in 2026.

From Dark Web Searches to Deepfake IDs

According to Bloomberg, the tools used by fraudsters are already highly advanced.

One example is FraudGPT, a language model trained on breach data. Such tools can test hundreds of thousands of Social Security numbers within minutes until they find a valid combination, ideally one linked to low account activity, since dormant accounts are less likely to be monitored by their owners.

But the bigger leap comes from agentic systems that can connect several steps automatically. One set of sub-agents can search the dark web for usable personal data. Others can contact multiple banks under different identities. Another group can automatically complete complex government forms for loan applications.

A U.S. student aid employee reportedly told Haque that the large number of college applications filed in such a short period was difficult to explain without AI assistance.

How the “Bust-Out” Fraud Scheme Works

Naureen Ali, U.S. Head of Fraud at TransUnion, describes a common pattern known as a bust-out scheme.

Fraudsters first open small credit lines at local banks. Later, they apply for larger lines of credit at institutional lenders. Where identity documents are required, they may submit deepfaked driver’s licenses. Once the accounts are approved, the criminals max out the credit cards and drain the accounts.
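
To make the pattern concrete, here is a minimal Python sketch of how a lender might flag bust-out behavior: several young accounts, rapid requests for higher limits, and balances pushed to the ceiling. The field names, weights and thresholds are illustrative assumptions, not TransUnion’s actual model.

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical account record; fields, weights and thresholds are
    # illustrative assumptions, not any bureau's or lender's real model.
    @dataclass
    class CreditAccount:
        opened: date
        credit_limit: float
        balance: float
        limit_increase_requests: int  # raises requested since opening

    def bust_out_risk(accounts: list[CreditAccount], today: date) -> float:
        """Crude 0..1 heuristic for the bust-out pattern described above."""
        young = [a for a in accounts if (today - a.opened).days < 180]
        if not young:
            return 0.0
        # Signal 1: many recently opened lines (the small "seed" accounts).
        velocity = min(len(young) / 5.0, 1.0)
        # Signal 2: aggressive limit-increase requests soon after opening.
        escalation = min(sum(a.limit_increase_requests for a in young) / 10.0, 1.0)
        # Signal 3: balances near the limit (the max-out-and-drain phase).
        utilization = sum(a.balance for a in young) / max(sum(a.credit_limit for a in young), 1.0)
        return round(0.3 * velocity + 0.3 * escalation + 0.4 * min(utilization, 1.0), 2)

    # Two fresh, nearly maxed-out lines with repeated limit requests score high:
    accounts = [CreditAccount(date(2025, 9, 1), 500.0, 480.0, 2),
                CreditAccount(date(2025, 10, 1), 1000.0, 990.0, 3)]
    print(bust_out_risk(accounts, date(2025, 12, 1)))  # 0.66 -> manual review

Real systems weigh hundreds of such signals with machine-learned models rather than fixed weights; the point is simply that the bust-out pattern leaves a measurable trail in account data.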

Ali estimates global fraud losses at more than $534 billion per year, although it is unclear how much of that total is directly linked to AI.

Why AI Makes Fraud Harder to Detect

The main danger is not only that AI helps criminals work faster. It also makes fraud more convincing.

Bruemmer summarized the problem clearly: AI makes attacks faster, more sophisticated and visually more believable. Phishing emails are now difficult for most recipients to identify as fake.

Tamás Kádár, CEO of fraud prevention company SEON, said criminals can now build complete phishing websites without writing a single line of code.

This changes the threat landscape. Fraud no longer requires the same level of technical skill. AI lowers the barrier for criminals while increasing the scale and quality of attacks.
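
The flip side is that machine-generated phishing sites still leave detectable traces. As one small illustration, defenders often screen for lookalike domains by measuring edit distance against known brand names. The Python sketch below is hypothetical; the brand list and distance threshold are assumptions for illustration.

    # Lookalike-domain check: flag domains within a small edit distance of a
    # known brand. The brand list and threshold are illustrative assumptions.

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance via the classic dynamic-programming table."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    KNOWN_BRANDS = ["experian", "transunion", "equifax"]  # illustrative list

    def looks_like_phish(domain: str, max_distance: int = 2) -> bool:
        label = domain.lower().split(".")[0]  # "experlan" from "experlan.com"
        return any(0 < edit_distance(label, brand) <= max_distance
                   for brand in KNOWN_BRANDS)

    print(looks_like_phish("experlan.com"))   # True: one letter off "experian"
    print(looks_like_phish("experian.com"))   # False: exact match, real brand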

Can AI Help Stop AI-Driven Fraud?

Experts interviewed in the Bloomberg report say the main defense against this new wave of fraud may also be AI.

TransUnion uses automated liveness checks to detect AI-generated selfies. SEON analyzes transactions using its own risk-scoring systems to identify suspicious behavior.
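
Risk scoring of this kind can be sketched in a few lines. The hypothetical Python snippet below combines simple behavioral signals, such as account age, device novelty and transaction velocity, into a single score; the features, weights and cutoffs are assumptions for illustration, not SEON’s or TransUnion’s actual systems.

    from dataclasses import dataclass

    # Illustrative transaction features; names, weights and cutoffs are
    # assumptions, not the scoring used by SEON or any other vendor.
    @dataclass
    class Transaction:
        amount: float
        account_age_days: int
        new_device: bool
        ip_matches_billing_country: bool
        txns_last_hour: int

    def risk_score(t: Transaction) -> float:
        """Combine simple behavioral signals into a 0..1 risk score."""
        score = 0.0
        if t.account_age_days < 30:
            score += 0.25  # brand-new accounts carry more risk
        if t.new_device:
            score += 0.20  # unseen device fingerprint
        if not t.ip_matches_billing_country:
            score += 0.25  # geolocation mismatch
        if t.txns_last_hour > 5:
            score += 0.20  # velocity spike typical of automation
        if t.amount > 1_000:
            score += 0.10  # large amounts add marginal risk
        return min(score, 1.0)

    def route(t: Transaction) -> str:
        s = risk_score(t)
        return "block" if s >= 0.7 else "review" if s >= 0.4 else "approve"

    print(route(Transaction(2500.0, 3, True, False, 9)))  # "block"

Production systems replace such hand-tuned weights with models trained on labeled fraud outcomes, which is where the “AI versus AI” dynamic described by the experts plays out.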

For individuals, the basic protection measures remain important:

  • freezing credit reports;
  • using multifactor authentication;
  • switching to passkeys where possible;
  • avoiding public Wi-Fi without a VPN;
  • monitoring bank and credit accounts regularly;
  • being cautious with unexpected emails, forms and application confirmations.

Expert Takeaway

AI is turning identity theft from a manual crime into an automated, scalable operation. What once required stolen documents, technical skill and repeated human effort can now be partly delegated to generative models, deepfake tools and autonomous agents.

The biggest risk is that fraudsters can combine stolen personal data, fake documents and automated application systems into one coordinated workflow. This makes attacks faster, harder to detect and more damaging for victims.

For consumers, traditional protection tools are still necessary, but no longer sufficient on their own. Credit freezes, multifactor authentication and passkeys should become default habits. For companies and financial institutions, AI-based fraud detection is no longer optional — it is becoming a basic requirement for survival in the next phase of digital identity crime.